perm filename TIMNG.REP[TIM,LSP] blob sn#575630 filedate 1981-03-30 generic text, type C, neo UTF8
		       LISP Timing Progress Report
			       R.P. Gabriel

The LISP Timing Evaluation project began at the end of February and
is slowly taking shape. Although originally conceived purely as a
timing project, other evaluations are now being considered. Because
a large number of systems and people are involved, there are no
results to report at the moment; the project is still in its
organizational stage.

The idea of the project is to provide both objective and subjective
bases for a rational choice among the various LISPs and LISP systems
available today or in the near future. The objective measures will
be provided by benchmarks run on each of the systems in the test,
with measurements made in terms of CPU time. These benchmarks are
being contributed by people at the various sites in order to provide
a range of interesting benchmarks, not merely a few artificial ones.

The subjective measures are descriptions of the systems provided
by the volunteers, along with their experiences translating the
various benchmarks: since the benchmarks are not restricted in any
way, a translation phase is required at each site. The tools and
problems associated with each translation (with the original and
translated programs as evidence) can be interpreted as a measure
of the expressive power of a given LISP or LISP system.

A final measure of non-language efficiency will also be attempted,
namely garbage-collection time and system paging overhead, though
there will be some technical problems here.
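One possible approach to separating out garbage-collection time is sketched below in MacLisp-style Lisp. It assumes MacLisp-style timers, where (RUNTIME) returns accumulated cpu microseconds and (STATUS GCTIME) returns microseconds spent in the garbage collector; RUN-BENCHMARK is a hypothetical stand-in for the form being measured, not a function defined by the project.

	;; Sketch only: assumes MacLisp-style (RUNTIME) and (STATUS GCTIME),
	;; both reporting microseconds; RUN-BENCHMARK is a placeholder.
	(DEFUN TIME-SANS-GC ()
	  (PROG (T0 G0 T1 G1)
	        (SETQ T0 (RUNTIME) G0 (STATUS GCTIME))
	        (RUN-BENCHMARK)                 ; the form being measured
	        (SETQ T1 (RUNTIME) G1 (STATUS GCTIME))
	        ;; total cpu time minus gc time accumulated during the run
	        (RETURN (DIFFERENCE (DIFFERENCE T1 T0)
	                            (DIFFERENCE G1 G0)))))

Paging overhead is harder to factor out this way, since most of the systems under test do not expose a comparable per-process paging clock.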

The following is a list of the systems to be tested as known at this point:

InterLisp on MAXC, Dolphin, Dorado
MacLisp on SAIL
NIL on S-1
InterLisp on SUMEX
UCILISP on Rutgers
SpiceLisp on PERQ
Lisp Machine (Symbolics, CADR)
MacLisp on AI, MC
NIL on VAX
InterLisp on F2
Standard Lisp on TOPS-10, B-1700 
LISP370
TLC-lisp on Z-80
muLisp on Z-80
Muddle on DMS
Rutgers Lisp
Multics MacLisp
Jericho InterLisp
Cromemco Lisp on Z80
Franz Lisp on VAX UNIX
UTILISP

At this point only about 5 benchmarks have been proposed, and I fear that
I will need to propose a number of them myself, though I hope to get the
volunteers to code them up for me. At present I expect to have the
following types of benchmarks:

	Array reference and storage (random access)
	Array reference and storage (matrix access)
	Array reference and storage (matrix inversion)
	Short list structure (records, hunks...)
	Long list structure (cdr access)
	CAR heavy structures
	CDR heavy structures
	Interpreted function calls
	Compiled function calls
	Smashed function calls
	Table function calls (FUNCALL, SUBRCALL)
	Tail recursion (?)
	Block compiling
	Reader speed
	Property list structures
	Atom structures (saturated obarrays)
	Internal loops
	Trigonometric functions
	Arithmetic (floating and fixed)
	Special variable lookup
	Local variable lookup
	CONS time
	GC time
	Compiled code load time
	EQ test time
	Arithmetic predicates
	Type determination
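
As an illustration of the function-call categories above (a sketch of the sort of benchmark intended, not one formally adopted by the project), a small doubly recursive function such as Takeuchi's TAK function spends nearly all of its running time in function calls and argument passing, so timing it both compiled and interpreted isolates call overhead:

	;; Sketch of a function-call benchmark: almost all of the time
	;; in a call such as (TAK 18. 12. 6.) goes into function calls,
	;; so compiled versus interpreted timings measure call overhead.
	(DEFUN TAK (X Y Z)
	  (COND ((NOT (LESSP Y X)) Z)
	        (T (TAK (TAK (SUB1 X) Y Z)
	                (TAK (SUB1 Y) Z X)
	                (TAK (SUB1 Z) X Y)))))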

The time scale, given the current voluntary nature and the broad extent
of the project, is approximately 1 year: most of the benchmarks should be
timed by September 1981, with the final report due around March 1982.